Prototype CompLF is a work-in-progress proof assistant design combining ideas from Nuprl and LF, with undecidable type checking. In this article, “CLF” and “CompLF” both abbreviate “Computational Logical Framework”. Elsewhere, “CLF” is used to abbreviate “Concurrent Logical Framework”, which is mostly unrelated. So far, CompLF is just Matt Oliveri’s unpublished formal system and a formalized metatheory for it.
The motivation for CLF is to try and combine the convenience of type systems with the flexibility of program verification, and make the resulting style of correct program construction applicable to mathematics in a general way.
In most typed computational languages—even those with rather strong type systems, like the dependent type systems of Coq and Agda—there is a distinction between types and program specifications. A specification should only constrain the behavior of a program, not its implementation.
Types typically constrain both behavior and implementation. Intuitively, this is because types are typically implemented using elaborate syntax checks, so they constrain behavior by constraining implementation.
A program logic provides predicates and proof rules about programs, and this provides an adequate approach to specification and verification of computational behavior. The requirement to satisfy a predicate doesn’t constrain the way a program is implemented because the syntax rules for forming a program don’t take into account any predicates the programmer may hope the program to satisfy. This is the sense in which program verification is more flexible than syntactic typing.
Using propositions as types, many dependent type systems support specifications, in addition to types. But with the restriction to syntactic type checking, types and specifications remain distinct notions in practice.
(Many proof assistants have a feature called “program extraction” that converts term definitions into an external program. Using program extraction, syntactic dependent typing of terms can match the flexibility of specifications for external programs. This doesn’t provide any additional flexibility for terms themselves, though. (The realizer extraction mechanism described below is analogous to program extraction, but with terms as the output of realizer extraction, not the input.))
By taking types to denote (certain) PERs on terms, Nuprl provides an extrinsic notion of type that subsumes program specifications. So Nuprl has a type system, and its types provide the flexibility of program verification; but it seems that Nuprl doesn’t provide the convenience of a type system with a type checking algorithm.
From a naive perspective, it can’t be done. Type checking Nuprl would be undecidable, but more importantly, it would require automatically coming up with proofs of true mathematical statements, and there’s no good way to do that.
But that just means there needs to be additional machinery for formal proofs and practical proof checking. The question is then whether the formal proof machinery is as convenient, for developing correct programs, as type checking. The answer, for Nuprl’s proof system, is supposedly “no”. CLF will try to do better, by checking proof terms in a manner similar to dependent type checking.
CLF is planned to combine a semantics similar to that of Nuprl with a language of proof terms similar to that of Cedille. (Cedille is actually the latest in a series of dependent type systems (co)designed by Aaron Stump in which the meaning of equality pertains to realizers, not proof terms.)
The terms that the CLF type system is about are called “realizers”. (The name comes from the fact that its Nuprl-inspired PER semantics is a term realizability model.) The proof terms would be an additional class of terms. Proof checking in CLF means type checking the proof terms. In the case of “obviously-well-typed” programs, the tree structures of the realizers and proof terms should be nearly isomorphic, so the system will resemble a typical syntactic type checker. In general, proof terms have additional information not present in realizers. This is the information that is needed for proof/type checking, but is not part of the computational content of the witness/element.
CLF types only depend on realizers, never proof terms. So to handle proof checking rules analogous to certain rules of dependent type systems, a realizer needs to be extracted from a proof on the fly. (Nuprl works around such rules, since it doesn’t extract realizers on the fly.) This solution is used in RedPRL, and is discussed in Algebraic Foundations of Proof Refinement, under the terminology “dependent proof refinement”. An analogue of on-the-fly realizer extraction is used by Cedille too, and is called “erasure”. This kind of proof checking algorithm seems to be an example of an elaboration algorithm: the input expressions (in this case, proof terms) are different from the internal terms (in this case, realizers).
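To make the proof-term/realizer split and this kind of erasure concrete, here is a minimal Haskell sketch. The constructors (the domain annotation on `PLam`, the `PConv` conversion step) are made up for illustration and are not CLF's actual term formers.

```haskell
-- A made-up miniature of the proof-term/realizer split.  Proof terms carry
-- extra annotations; realizers keep only the computational content.
data Realizer
  = RVar String
  | RLam String Realizer
  | RApp Realizer Realizer

data ProofTerm
  = PVar String
  | PLam String Realizer ProofTerm   -- binder annotated with a domain realizer
  | PApp ProofTerm ProofTerm
  | PConv ProofTerm Realizer         -- appeal to a conversion; records the target type

-- On-the-fly realizer extraction ("erasure"): forget everything that is not
-- computational content.
erase :: ProofTerm -> Realizer
erase (PVar x)     = RVar x
erase (PLam x _ p) = RLam x (erase p)
erase (PApp p q)   = RApp (erase p) (erase q)
erase (PConv p _)  = erase p          -- conversion steps don't change the realizer
```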
The only information from derivations that’s missing from proof terms is derivations of definitional equality, which is beta conversion. This is enough to make proof checking undecidable, since beta convertibility of untyped realizers is undecidable. CLF will boldly attempt to automate beta conversion by comparing normal forms nonetheless.
Before normalizing terms, to compare them for definitional equality, they are first compared for alpha-equivalence. This allows the user to safely “hand-crank” an equality proof about terms without normal forms. So CLF proof checking will not be total, but it should still be sound and complete with respect to formal derivability.
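Here is a small Haskell sketch of that conversion strategy for plain untyped lambda terms, using de Bruijn indices so that alpha-equivalence is just structural equality. It illustrates the idea, not CLF's actual implementation: the alpha check is tried first, and the normal-form comparison can diverge on terms without normal forms, which is exactly where totality fails.

```haskell
-- Untyped lambda terms in de Bruijn form.
data Term = Var Int | Lam Term | App Term Term
  deriving Eq

-- Shift free variables at or above cutoff c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for variable j, lowering the variables above j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)
  | k == j    = s
  | k >  j    = Var (k - 1)
  | otherwise = Var k
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Weak head normalization (call-by-name); may diverge.
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam b -> whnf (subst 0 a b)
  f'    -> App f' a
whnf t = t

-- Normal-order normalization: finds the normal form when one exists,
-- diverges otherwise.
normalize :: Term -> Term
normalize t = case whnf t of
  Lam b   -> Lam (normalize b)
  App f a -> App (normalize f) (normalize a)
  v       -> v

-- Alpha-equivalence first, so equations between alpha-equal terms without
-- normal forms still go through; otherwise compare normal forms.
convertible :: Term -> Term -> Bool
convertible t u = t == u || normalize t == normalize u
```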
LCF-style proof engines check proofs by executing some kind of “proof script”, which runs “tactics” to reduce proof goals. Tactics fail when they attempt to use an invalid rule instance. The proof is complete when all goals are reduced to nothing.
A proof term can be thought of as a restricted form of proof script. The primitive proof term formers correspond to primitive derivation rules. Open proof term schemas correspond to derived rules. Operations on closed proof terms correspond to admissible rules. Tactics are arguably somewhere between derived rules and admissible rules: they tend to intensionally analyze the goal, but not entire proofs. Intuitively, tactic engines are a kind of macro expansion system to automate the production of proof terms.
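The LCF discipline itself fits in a few lines. The following Haskell sketch uses placeholder types (goals and evidence are just strings) and is only meant to show the shape of tactics, validations, and proof scripts, not any particular engine.

```haskell
type Goal     = String            -- stand-in for a proof goal
type Evidence = String            -- stand-in for a proof term or realizer

-- A tactic either fails (invalid rule instance) or reduces a goal to
-- subgoals, with a validation that assembles evidence for the goal out of
-- evidence for the subgoals.
type Tactic = Goal -> Maybe ([Goal], [Evidence] -> Evidence)

-- A proof script is a tree of tactics.
data Script = Script Tactic [Script]

-- The proof is complete when every branch reduces its goal to nothing, and
-- evidence then flows back up through the validations.
run :: Script -> Goal -> Maybe Evidence
run (Script tac subs) goal = do
  (subgoals, validate) <- tac goal
  if length subs /= length subgoals
    then Nothing
    else do
      evs <- sequence (zipWith run subs subgoals)
      pure (validate evs)
```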
When proofs are “generated” by tactics, proof terms should not actually need to be generated in memory; only realizers. This makes CLF’s proof checking like that of Nuprl and Isabelle, and unlike that of Coq. In Coq, the proof terms need to be generated in order to perform termination checking on them. (So Coq’s trusted kernel is not LCF-style; it’s “Automath-style”: proof checking is type checking.) Nuprl-style systems don’t benefit from termination checking because it’s possible to prove that an untyped realizer implements an element of the intended type, which entails any necessary totality properties. (And Isabelle/HOL doesn’t benefit from termination checking because it uses a global choice operator.)
It’s good for performance that the (relatively complex) proof terms never need to be dealt with, aside from a single compositional pass to check them. Instead, most of the detail work of proof checking deals with realizers.
While proof terms seem too complex to be the terms you want to deal with, it may turn out that CLF realizers are too simple. The concern is that the realizers may not have enough annotations for useful forms of proof automation. But if this does turn out to be a problem in practice, it still seems better to add annotations to the realizers as necessary, rather than to give up and collapse the distinction between proof terms and realizers.
The primitives of CLF will probably be pretty low-level, and will rely on some “bootstrapping” up to the usual principles of type theory. (Some of this can be seen already, in the discussion of the rules below.) This bootstrapping will rely in crucial ways on the realizers being fundamentally untyped.
However, once the bootstrapping is taken care of, the intent is that the main way of using CLF will be a lot like other dependently-typed proof assistants: Constructions will be thought of as elements of types, and the type system will guide the details of constructing elements.
Given the goal of a system that feels typed, it may seem weird to explicitly base it on untyped realizers. The reason goes back to the flexibility of specifications, compared to purely syntactic typing. It should always be possible to construct elements in such a way that the computational content is well-implemented. But it seems that syntactic dependent typing requires ever more tricks to be added to the language in order to express elements using terms that are efficient.
Rather than having to change the language, CLF provides the option of reasoning about assignment of types to untyped realizers to prove the missing type theoretic principles. Of course, reasoning about type assignment can also be used, to taste, even when not strictly necessary. In general, there’s a spectrum of styles for using an extrinsic type theory, based on how much of the well-typedness of a construction is true by construction, versus proven afterward. Correctness by construction makes the development feel more intrinsic, but this difference in style is not a commitment: the extrinsic perspective applies to everything.
The above considerations motivate a whole class of type system implementations that combine Nuprl-style semantics with a Cedille-like proof term checker. But what about categories?
To justify the proof rules, systems like Nuprl, Cedille, and CLF are based on term realizability models. But it’s not currently clear to what extent this approach is compatible with the categorical semantics of type theory.
The specifics of CLF are motivated by the insight (of Matt Oliveri, but strongly inspired by Andromeda) that extensional type theories (with equality reflection) promise to make great logical frameworks. Although categorical semantics for “structural” fragments of Nuprl-style systems may be feasible and useful, CLF sidesteps the whole issue by being a metalogic for reasoning about formal systems.
The idea is that by regarding CLF as a metalogic, its possible lack of categorical semantics is irrelevant because the object logic can be chosen so as to reason about the desired semantics. CLF as a metalogic is a way to try to make the (hypothesized) technical advantages of a Nuprl-Cedille-like approach applicable to any type theory.
Some discussion is needed to explain what makes extensional type theory good for logical frameworks, what that has to do with metalogics, and what makes CLF a good metalogic. Later.
First, what follows is a draft version of the formal system that will be used in prototype CLF. These rules are non-algorithmic rules pertaining only to types and realizers; the details of the proof system are not designed yet. (And they won’t be until some issues are addressed in the semantics and rule set.) Some sketchy thoughts about the design of the proof system are at the beginning of the Rules section.
The main syntactic class is terms. Type expressions are terms. There are also variables and contexts.
variables (, , …) ::=
(Written as usual. Complain if you can’t tell them apart from metavariables.)
terms (, , , , , , , , …) ::=
(variable reference)
(lambda abstraction)
(application)
(equality type)
(dependent function type)
(family intersection type)
(type of computations (realizers))
(subset/separation type)
(PER comprehension type)
(type of “booleans”) (two-element type)
To be added: an infinite type.
contexts (, …) ::=
|
As usual, the rules may make liberal use of informal list operations to work with contexts.
(beta conversion)
The only beta reduction is the contraction of a redex: $(\lambda x.\, b)\, a$ reduces to $b[a/x]$. Conversion is the congruence closure.
The other judgment forms will be defined inductively by the rules below. They are:
(type validity)
(element in a type) (realizer of a type)
Note that in Nuprl, the () form is written (). The “extract” part is computed from a derivation, so the user thinks of the judgment as stating that the type is inhabited.
In the rules below, however, no attention is paid to what can be metatheoretically computed with terms or derivations. That is (important) future work.
() is used instead of () in order to avoid the misinterpretation of this judgment form as intrinsic typing of terms. Note the contrast with variables, which are intrinsically typed.
The () form can be translated to this system as either () or (). The latter uses the abbreviation “” defined below, and the abuse of meta-notation that we don’t care what goes in the “”. With (), we think of as resulting from the proof of . With (), we think of the “semantic judgment” () as a goal to prove internally. The rules ensure that these two ways of expressing an element of a type are interderivable. From this point on, () will be written as just (), although it remains informal.
“ equality is respected in .”:
“ members are included in .”:
“ is a subtype of .”:
Note: () ought to be logically equivalent to (), using the rules. And also to (). So thinking of types as PERs, subtypes are subrelations.
“ is a refinement/subquotient of .”:
Note the reversal of in , compared to . As relations on types, , , and ought to be preorders. is not transitive.
The Church booleans:
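Concretely, writing $\mathsf{tt}$ and $\mathsf{ff}$ for the two booleans (the names here are illustrative, not necessarily CLF's), the standard Church encodings are

$$\mathsf{tt} \;:=\; \lambda x.\, \lambda y.\, x \qquad\qquad \mathsf{ff} \;:=\; \lambda x.\, \lambda y.\, y$$

so that a boolean applied to two arguments returns either the first or the second.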
“Squash” of :
These rules are not the final rules that are planned to be implemented in proto CLF. The major differences planned between the final rules and those given here are:
The expectation and hope is that the result will look fairly similar to algorithmic elaboration-style type checking rules. These rules will be the primitive tactics, and type checking will thus be a special case of running tactic trees.
We have a strong equality formation rule, inspired by the medium-strength rule from Anand & Rahli, which is also stronger than the usual one. Here is respect-based equality formation:
The type of computations (realizers), which is called “Base” in Nuprl:
A PER comprehension type was originally conceived by Stuart Allen, and then forgotten and subsequently reintroduced. However, I think Nuprl already had the ingredients (including subsets and quotients) that make it definable. The real innovation is how PER comprehension and the strengthened equality formation rule interact to allow internal, logical-relations-style type definitions. We only allow forming a PER type from a pseudo-PER, rather than implicitly taking the symmetric transitive closure of any family:
The untyped beta conversion rules have the same role as Nuprl’s direct computation rules:
Every type respects itself, and . (Yes, you can use any term you like as the proof.):
A reflexivity rule, but any term can be the proof:
Terms participating in equality are elements. Normally this would only be admissible, but in CLF, it comes in handy to make it into “selectivity” rules:
Note that the above equality rules make () and () interderivable, as promised.
We use a “subsumptive” rewrite rule as equality elimination. “Subsumptive” refers to the property that the realizer of the type we rewrite in is unaffected. The terminology “subsumptive” for this is new to CLF, although this is effectively the same equality elimination rule as in Nuprl. The terminology comes from “subsumptive” vs “coercive” implementations of subtyping. Note that formally, CLF has no typed judgmental equality. Semantically, it’s an extensional type theory, but the semantic equality judgment is implemented as the equality type, like in Nuprl. This is sound due to equality reflection. So this rule is semantically a consequence of equality reflection, equality substitution, and (typed) conversion. As with equality reflection, the proof of equality is irrelevant:
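Schematically, writing the element judgment generically (the exact CLF formulation may differ), the rule has the shape

$$\frac{\Gamma \vdash a = b \in A \qquad \Gamma,\, x{:}A \vdash C\ \mathsf{type} \qquad \Gamma \vdash t \in C[a/x]}{\Gamma \vdash t \in C[b/x]}$$

with the same realizer $t$ in premise and conclusion; no coercion is inserted.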
Any two terms are equal as the (consequently unique) proof of a true equality:
That means too, which is thus the maximum PER, ordered by subtyping. (Well, just as soon as we make an element of .)
We have the standard application rule, and a variant of function extensionality:
Unconventionally, we consider function extensionality to be the intro rule, and derive the fact that lambdas implement functions.
Elements of a family intersection are like elements of a type, except you don’t have to apply them. An intersection element is already an element of all instances of the family. So the rules for family intersection are analogous to the rules for :
The subset type resembles an existential quantifier. In the elimination rule, the proof part must be computationally irrelevant, since an element of a subset does not include the proof. Nuprl handles this with a hidden assumption, which is the practical way to do it. But it’s also sound to just stick in a free variable side condition.
The rules for PER comprehension are made simpler by the fact that we required to be a pseudo-PER in the formation rule. The idea is that (), using the squash type constructor defined above. So comprehension and equality are almost inverse; you just lose the computational content of the equivalence proof.
Exercise: Does the following rule express (), (), or ()?
Our elimination rule is intuitively characterizing as the least type containing and . The usual elimination rule is derived; it’s derivable because Church booleans actually work.
An ex falso quodlibet rule, which is technically about booleans, because of how we defined . Note that this makes the minimum PER, ordered by inclusion, subtyping, or refinement.
The structural rules of weakening and substitution are admissible. “Strengthening” is not. Strengthening says that a variable that’s not free in the conclusion can be removed from the context:
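Schematically, for an arbitrary judgment $\mathcal{J}$:

$$\frac{\Gamma,\, x{:}A,\, \Delta \vdash \mathcal{J}}{\Gamma,\, \Delta \vdash \mathcal{J}}$$

where $x$ occurs free neither in $\Delta$ nor in $\mathcal{J}$.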
So nonadmissibility of strengthening means that a derivation can be using a hypothesis without it being obvious from the conclusion derived. This is typical of extensional type theory. Typically it’s considered all the fault of equality reflection, which may be technically true for some systems. But Nuprl benefits more from the lack of strengthening: subsets, intersections, quotients, and other “refinement-style” type constructors provide flexible control over which proofs make it into the computational content, and which are merely establishing external facts. (Quotients don’t have to be refinement-style, but in Nuprl and CLF, they are. See below.) Computational irrelevance is implemented by literally omitting the irrelevant proofs from terms. This leads to the lack of strengthening, and to undecidable type checking. The restriction imposed by equality reflection is that equality proofs never have computational content.
CLF follows Nuprl in trying to give the user maximum control over the terms used to represent elements. The hope is that with care, this can allow significantly greater convenience and performance than with decidable dependent type systems, which are committed to retaining enough annotations to effectively reject ill-typed terms. Once proof terms are added to the formal judgments, you probably do get strengthening. (But not decidability.)
You’d think a statement should make sense before you try to prove it. And likewise, that a type should be valid before you try to construct an element. But in practice, it also works to consider a typed construction as providing evidence that the type makes sense. Specifically, the only way to derive () is if you could also derive (). That’s a “sanity check” admissible rule. It’s probably admissible for the judgments defined by the rules above.
Also, since the only way to derive type validity in this system (not using the sanity admissible rule) is to use a type former, a derivation of type validity implies that there are derivations of validity for the subexpressions of the type expression. These are “inversion” admissible rules.
For full Nuprl, it seems practically impossible to prove admissibility of sanity or inversion, if they’re not just included among the primitive rules. But they can be included among the primitive rules, because they’re true in the semantics. Due to the way hypothetical judgments are defined, making the inversion rules true requires making type equality intensional.
The above rules are the result of an experiment to reason about formal derivability in a Nuprl-style system. In the PER semantics that the above rules were based on, type equality is extensional. So the basic rules directly satisfied were more complicated, since they needed extra premises that inversion would’ve made redundant. The above rules are actually admissible rules relative to those basic rules. Sanity and inversion are “probably” admissible for the above rules because what’s actually known is that they’re admissible for the basic rules; they were in fact used to prove the above rules.
So the experiment was partially successful, but I (Matt Oliveri) don’t recommend the approach. It cannot be easily adapted to handle universes, and some of the rules above still have type validity premises that a fully semantic approach could avoid.
It may seem weird that Nuprl is based on a particular PER semantics, if you think of it as a mathematical model. But if you think of it as a proof theoretic technique that reasons about realizability instead of formal derivability, perhaps it’s more palatable.
Thinking that way, all the consequences of the PER semantics are like admissible rules, and can be added to the formal system if they’re useful. But it still seems like a good idea to try and find a stable formal system with enough rules that incompleteness is not a problem in practice.
This raises questions though: If the formal rules of a Nuprl-style system are considered admissible, what defines the system? What are the mathematical models of Nuprl-style systems? If they’re considered unnatural, should Nuprl-style systems be modified?
In my (Matt Oliveri’s) opinion, Nuprl-style systems are potentially relevant and useful to mathematics, but it’s not currently clear exactly how that might work.
Now that we’ve covered why the distinction between derivable and admissible doesn’t really matter for this system, we can move on to particular aspects that may seem especially in need of explanation.
The equality formation rule refers to the respect relation. The latter is not primitive; it is defined in terms of equality. Is that OK? Yes: because of the rule that all types respect the type of computations, we get enough instances of equality formation to prove:
If it still seems suspiciously circular, rest assured that in the semantics, respecting computational equivalence is more basic than anything in the type system. The rule about respecting just exposes the fact internally.
Because every type respects itself, we get all the instances of equality formation from Martin-Löf type theory. Equality types beyond that are for reasoning about membership.
Taking function extensionality to be intro seems odd. Not only is it not about lambdas, it’s not apparently about elements at all; just equality. That’s (one place) where the selectivity rules are handy. The function extensionality rule is strong enough to combine with selectivity (and reflexivity) to derive elements of function type. A term is a function when you can apply it to an argument to get a result:
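Schematically, in generic notation (not necessarily CLF's), the derived characterization has the shape

$$\frac{\Gamma,\, x{:}A \vdash f\, x \in B}{\Gamma \vdash f \in (x{:}A) \to B}$$

where $x$ is not free in $f$.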
Proving that lambdas are functions additionally uses beta conversion. Lambdas are functions, not by fiat, but because they produce results when applied.
Note that not all functions are denoted by lambdas. At the very least, the empty function is denoted by any term you like. That is, a function type with empty domain has the same PER as . However, any function is equal to a lambda, when considered only as a function. Formally, we have
as well as (the weaker)
but not
.
With such a small set of type constructors, it may be hard to tell, but congruence rules are rarely needed as primitives. Beta conversion was already defined as an untyped congruence closure, and for typed equality, the subsumptive rewrite rule does most of the work of congruence rules. The extensionality rules for and are needed to help with binders and (our ersatz) hidden assumptions. That may turn out to handle everything, with this judgmental setup.
With many type theories, it’s cumbersome but straightforward to define collections of the formal derivations of the theory’s judgments as type families in the theory itself. This provides a form of syntactic reflection of a system’s judgments within itself. There are ways of making syntactic reflection more convenient and/or powerful.
But Nuprl-style systems provide a quite different kind of judgmental reflection that we’ll call “semantic judgmental reflection”, or just “judgmental reflection” for short. The original example of judgmental reflection, and the basis of the name, is the equality reflection rule of extensional type theory (ETT), which makes the “identity” type family a reflection of the meaning of the element equality judgment of the same system. In a sense, this is still the only judgmental reflection principle in Nuprl-style systems. (But the type family is called “equality”.) The difference is in the interpretation of the equality judgment.
There are two styles of interpretation for ETT: intrinsic and extrinsic. Extrinsic is the original: Martin-Löf’s meaning explanation, which was the original justification for equality reflection, is an example. Intrinsic seems to have become the default; it’s the style considered in Martin Hofmann’s thesis, for example. But Nuprl and CLF commit to the extrinsic style.
An irony is that by considering terms extrinsically typed, you consider them intrinsically meaningful. In Nuprl and CLF, following Martin-Löf, terms have an intrinsic computational meaning. The role of the Nuprl-style type system is to reason about that. (Remember: this can all be considered a substitute for admissible rules! …Probably.)
There is actually no interpretation necessary, for terms in general. The type system can be layered over the terms as they are, making use of an operational model of their computational meaning. (A denotational semantics would probably work too.) The details need not concern us here, but some things are important:
All of these points are important to the design of a Nuprl-style system. But the one that seems to distinguish the approach from intrinsic typing is that being an element is a special case of equality. In the intrinsic approach, equality is only meaningful when applied to two terms known to have the same type. So being an element must be somehow prior to equality. But with the judgment of being an element as a special case of equality, it’s simultaneous.
A lot of the power of judgmental reflection is that it can be understood as reasoning about the computational content of the language as it actually is. But since it’s probably not necessary to interpret Nuprl-style systems as being about their own syntax, we should speak of “computations”, not “closed terms”. In CLF, however they’re interpreted, computations are axiomatized as the elements of . (And actually, it may be that types need not be computations.) Types are PERs on , and are accessed by the equality judgment form, which is reflected by the equality type constructor. So you use equality types to reason about equality in types, and also membership of computations in types.
So then what’s with all this respect (“”) stuff in the equality formation rule? If terms have an intrinsic computational meaning, and equality reasons about it, shouldn’t the rule be just:
?
No, that doesn’t work. The extra complexity has to do with making the axiomatization work for open terms. Consider the type . Then . But then consider (). The simple formation rule would derive it, but it doesn’t make sense, because () and () are different types, although and are the same . The problem is that equality of types and elements is generally different from equality of computations, and we want well-typed open terms to respect equality of elements. The respect-based equality formation rule doesn’t derive the bad type family because you’d need to find some type such that () and (). But if (), then considers and equal, while if (), then it doesn’t.
That variables and other open terms cannot generally be treated as computations is the sense in which variables are intrinsically typed: although elements are all realized by computations, an arbitrary element generally cannot be used as a computation, so reasoning about membership is constrained.
Based on the ability to reason about membership, in addition to equality, other types can be defined that seem like semantic judgments. For example, , , , and were defined above. But what kinds of things are these semantic judgments in general?
A lenient answer is that semantic judgments are type (families) such that any two elements of are equal ( is a subsingleton), and () is inhabited. That is, the squash-stable subsingletons. All such types are isomorphic to equality types, and are the regular subobjects, if you consider a Nuprl-style system as a category. (In a manner analogous to ETT, using inhabitedness of equality types to determine equality of morphisms.)
A stricter definition of semantic judgments is that they are type (families) such that (). Types formed with equality, squash itself, , , , and are always judgments in this strict sense. These types are extensionally equal to equality types (for example, this one: ) in the following sense:
This definition of extensional equality is motivated by the “subsumption” (derived) rule:
So gives us subsumptions in both directions. Extensionally equal types denote the same PER.
Thus, the semantic judgments in the strict sense are regular subobjects defined in a convenient way so that they’re inhabited if and only if they’re basically . is itself a semantic judgment. Also:
However, () is generally weaker. For example, and have the same members (computations) but are not extensionally equal, or even isomorphic.
We define propositional truncation as quotienting a type down to a subsingleton:
Unlike the squash of a type, the truncation of a type has the same set of computations as the type itself. So from a computational perspective, a member of the truncation is just as useful as a member of the original type. But well-typed computations using a member of the truncation are not allowed to depend on the identity of the member. So for example, a map out of the truncation is realized by a map out of the original type that sends any two members to equal results. So a map out of the truncation is constant, but that’s still more permissive than a map out of the squash, which doesn’t have access to a member at all.
Let
By non-contradiction, is a subsingleton, so . The member of , if you can get ahold of it, tells you whether is inhabited. On the other hand, a member of tells you nothing: its existence means that is either inhabited or not, but you still don’t know which. A member of also tells you nothing, and even its existence is uninformative: you can prove yourself, without knowing anything about .
There are implications and , going from more useful subsingletons to less useful subsingletons. There generally aren’t implications going in the other direction. You can sort of think of , , and as truncating down to constructive, intuitionistic, and classical propositions, respectively. Here, the distinction between “constructive” and “intuitionistic” is that constructive logic is explicitly computational, whereas intuitionistic logic merely declines to assume anti-computational principles, like excluded middle. (So for example, constructive set theory is merely intuitionistic by this terminology, though it has constructive models.) (None of this has to do with intuitionism.)
It should be consistent to add the rule:
This makes the semantic judgments classical, rather than intuitionistic. Constructive propositions remain constructive, and canonicity is not affected.
In contrast, having imply is not known to be consistent, and it ruins canonicity. (It’s quite possible that it’s consistent with the current rules, but this is pure luck. In full Nuprl, the same implication is probably false, due to “partial” types.)
Due to the lack of strengthening, a funny thing happens when we consider the non-dependent special case of dependent connectives. Consider conjunction (), which we defined as a special case of subset:
From the formation rule for subset, we derive the following formation rule for conjunction:
That is:
With strengthening, this would be basically the same as:
But without being able to strengthen the premises of to those of , itself is generally stronger than . Specifically, only needs to be a type if is inhabited. So is a short-circuiting conjunction, kind of like the && boolean operator from many programming languages, where the right conjunct need only be able to return a boolean when the left conjunct is true. (Because that’s the only case in which it’s evaluated.)
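For comparison, here is the short-circuiting pattern in a conventional functional language (a Haskell rendition of the Prelude's (&&), renamed to avoid the clash):

```haskell
-- The right conjunct is only demanded when the left one is True.
andAlso :: Bool -> Bool -> Bool
andAlso True  b = b
andAlso False _ = False

-- andAlso False undefined returns False without ever evaluating undefined,
-- just as the conjunction type only needs its right conjunct to be a type
-- when the left conjunct is inhabited.
```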
Implication is similarly short-circuiting, like a conditional expression from a programming language:
Short-circuiting disjunction doesn’t seem natural for types; what is a dependent coproduct? But if the left disjunct is a , there’s always (). only needs to be a type if is . Given an element of (), by branching on , we get either () or .
Combining () with the primitive elimination rule for , you get this alternative elimination rule, where the motive is a family on :
Curiously, you can go in the other direction too, and get both of the original rules from this one.
To get the usual elimination rule, prove that Church booleans are self-eliminating functions:
This is reasoning about membership, so we really do want to know that s are s. The usual elimination rule then just applies at that function type.
In Nuprl, it’s often possible to get induction rules without a premise showing that the motive is a valid family. The idea is that for sufficiently concrete types, validity of the motive can be proven pointwise, and moreover the cases of this proof are actually implied by the cases of the main induction.
In the current CLF rules though, it doesn’t seem possible to derive such rules. Also, including such strengthened induction rules as primitive doesn’t seem compatible with the current hybrid approach based on admissible formal rules: the sanity check would require an unusual type validity rule, which would ruin the inversion lemma. Basically the Nuprl-style justification for omitting the motive premise is “too semantic” for the current hybrid approach.
Subtyping variance rules should be derivable for the type constructors. Here are some for the primitive type constructors:
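For dependent function types, for instance, one expects contravariance in the domain and covariance in the codomain; schematically, writing $\sqsubseteq$ for subtyping (which may not be CLF's notation):

$$\frac{\Gamma \vdash A' \sqsubseteq A \qquad \Gamma,\, x{:}A' \vdash B \sqsubseteq B'}{\Gamma \vdash (x{:}A) \to B \;\sqsubseteq\; (x{:}A') \to B'}$$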
Yes, contravariance for function domains. A difference from subsetting in material set theory. Family intersections are completely analogous:
Subset is covariant in both positions:
This rule can be strengthened, since is proof irrelevant:
The intersection rule can be strengthened too, since the “argument” is irrelevant, but it’s not as pretty, because the type dependency is not irrelevant:
We have seen that in CLF (as in Nuprl) whether two terms are equal depends on what type we compare them at, and that we need to care about whether one type respects another’s equality. This gives rise to three different basic preorders on types (member inclusion, subtyping, and refinement), whereas material set theory has just one. What does all this extra complexity buy us?
Very roughly, it buys us the ability to reason directly using realizers, rather than some other system of terms that denote mathematical objects that are realizable. Intuitively, the realizers are elements of , and are fairly intensional compared to most types of mathematical object. But the terms are still written as realizers, no matter the type.
More specifically, it’s an aspect of how quotients work that seems to be the root cause of the difference between set theory and “PER theory”. PER semantics provides refinement-style quotients: there are no operations needed for mapping into or out of quotients. Quotienting has zero overhead in terms, since terms are realizers, and computationally, quotients don’t change anything. It’s up to the semantics of the type system to make quotient types work correctly anyway.
Eventually, general subquotients should be definable in CLF as:
(Of course, this definition can already be written, but it doesn’t currently seem to have all the desired properties, due to missing rules.)
This is using the short-circuiting behavior of conjunction to allow deriving a formation rule something like:
So we expect be a family on a pair of s, which means that equality is respected in , and therefore in . And by definition, a member of is a member of . That is, () and (), so therefore ().
In case is also reflexive for members of : (), then we also have (). Combining with respect, we get (). In this case is a (full) quotient of , and () realizes the quotient projection mapping from to .
Taking stock, with a pseudo-equivalence relation on , we have:
Since we have member inclusion in both directions, according to set theory, this “quotient” of is not actually any different from . But to PER theory, it’s the same set of realizers, but with a possibly coarser equality. We can see the difference with our old friend, . is a singleton, but is not.
In set theory, the intuition behind quotients is that a quotient of is a set of equivalence classes of members of . So and its quotient generally have different members. The effect on equality is a consequence of set extensionality. With PERs, this intuition is superficially contradicted by the semantic judgments, so it should be used with care, or not at all.
In programming, programmers informally use quotients, probably mostly without realizing it. For example, when using hash tables to implement finite maps, the representation has extra implementation details like the number of buckets, and the assignment of keys to buckets. Any meaningful results given to the user should not depend on this information. That doesn’t mean that the computation of the results should not depend on it, but that the results should not vary due to changes in implementation details. Programmers know that, so they effectively treat hash tables as quotient types where implementation details have been modded out. And no operations on equivalence classes of hash tables will be found in the program; programming languages provide no such thing. So the programmer’s quotient is not the set theoretic quotient construction, it’s the refinement-style quotient codified by PER theory.
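As a toy rendition of the hash table example (all names here are made up for illustration; keys are assumed distinct), the bucket count is an implementation detail, and the programmer's informal quotient identifies tables with the same associations:

```haskell
import Data.List (sortOn)

-- A toy "hash table": the bucket count and the distribution of entries over
-- buckets are implementation details; the key/value associations are the
-- meaningful content.  Keys are Ints and assumed distinct.
data HashTable v = HashTable
  { bucketCount :: Int
  , buckets     :: [[(Int, v)]]
  }

-- Build a table with n buckets, assigning keys to buckets by (`mod` n).
fromList :: Int -> [(Int, v)] -> HashTable v
fromList n kvs =
  HashTable n [ [ kv | kv@(k, _) <- kvs, k `mod` n == i ] | i <- [0 .. n - 1] ]

-- The meaningful result: the associations, in an order that forgets buckets.
toAssocs :: HashTable v -> [(Int, v)]
toAssocs = sortOn fst . concat . buckets

-- The programmer's informal quotient: tables are interchangeable when they
-- hold the same associations, regardless of bucket count.
sameMap :: Eq v => HashTable v -> HashTable v -> Bool
sameMap t u = toAssocs t == toAssocs u

-- fromList 4 [(1,"a"),(2,"b")] and fromList 7 [(2,"b"),(1,"a")] differ as
-- data structures, but sameMap identifies them.  A function that respects
-- the quotient may inspect buckets while computing, but its result must not
-- vary with them.
```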
(What seems to be missing from the current rules is a way to prove that you can map out of a quotient by mapping out of the base type in a way that respects the equivalence relation. For example, to allow using a hash table correctly by proving that only implementation details of the result vary with implementation details of the hash table.)
However, for many purposes, it may be a net improvement to give up on refinement-style quotients in order to get the simplicity of set theory. Refinement-style quotients seem particularly well suited to CLF’s use as a metalogic, though.
Logical frameworks provide a formal notation for languages. (Especially languages that support logical deduction.) The kind of logical framework CLF aims to provide is one that represents dependent type systems using HOAS signatures.
The logical framework “LF” uses HOAS, and can already represent seemingly any language, including dependent type systems. But the issue is not only whether a language can be represented at all, but how “deep” the encoding is. With a shallow encoding, the framework representation of encoded expressions of the language is close to a custom representation tailored to that language. With a deep encoding, a lot of overhead is introduced by the encoding, making the framework representation much less convenient than a custom one.
LF’s use of HOAS makes the encoding of many languages shallower, but only works for languages that can be expressed using natural-deduction-style rules. (That includes the degenerate case of languages with no binding forms. That is, combinator languages.) So to encode substructural languages, for example, evidence of substructural variable use needs to be handled explicitly, unlike with a custom representation.
There are two styles of HOAS-based encoding associated with LF, called “synthetic” and “analytic”. (See the logical framework page.) Synthetic is a deeper style of encoding than analytic, but analytic only works for languages where type equality is just alpha equivalence.
The formal description of a language in LF and similar logical frameworks is called a “signature”, since it’s similar to a signature for some fixed logic.
To see how equality reflection is useful for a logical framework, we will propose an extension of LF, called “LF=” (LF with equality), that combines the shallowness of LF’s analytic encoding with more of the flexibility of LF’s synthetic encoding. (But in LF=, the encoding is generally not analytic, in the sense of algorithmic judgments. That’s the catch.) But first, it’s instructive to cover analytic encoding in Martin-Löf’s logical framework (“MLLF”).
Unlike LF, MLLF doesn’t have a fixed judgmental equality. Term formers are specified in roughly the same intrinsically typed way as with the analytic encoding for LF. But meanwhile, judgmental equality rules for the object language are added by adding corresponding judgmental equality rules to the framework. Thus, MLLF can shallowly encode many type systems with analytic judgments, not just the ones where type equality is very simple, like with LF. MLLF is particularly well suited to specifying variants of Martin-Löf type theory.
MLLF is defined and discussed in Martin-Löf’s Analytic and Synthetic Judgements in Type Theory and in part III of Programming in Martin-Löf’s Type Theory. The former reference also discusses the notions of analytic and synthetic judgment, while the latter makes it perhaps clearer how the “theory of types” is intended to be used as a logical framework. Martin-Löf’s main theory of mathematical collections is the “theory of sets”, which is specified using the logical framework.
There are tools that provide a style of language specification very similar to MLLF. Dedukti implements a logical framework similar to LF, but signatures can also include typed rewrite rules that extend judgmental equality. There’s an Agda feature that allows proven or hypothesized equations to be declared as rewrites that extend judgmental equality. This can presumably be combined with axioms to imitate Dedukti signatures, and quite possibly more.
These tools are designed under the assumption that, while open-ended, judgmental equality remains analytic. That is, it’s something that you can sanely just check automatically. (Or so the story goes.) Trying to add extensional type theory’s equality reflection rule to a rewrite engine is not going to work. It generally won’t have a good way to solve the premise that proves the equation. (Either the system will give up, or do something useless.)
The equality reflection rule is enough to make the judgments synthetic. That is, something you generally have to prove. To handle type systems with synthetic judgments, you either need to use a synthetic encoding (for example, in LF or Isabelle), or use a tool designed to handle synthetic judgments, like Nuprl, MetaPRL, Andromeda, or (eventually, hopefully) CLF.
Like LF, and unlike MLLF, LF= has a fixed judgmental equality. Unlike either LF or MLLF, this judgmental equality is synthetic, because LF= has equality reflection. (LF=’s typing judgment form is synthetic as well, for the usual reason that it makes use of judgmental equality.)
LF= is LF plus a reflective equality type constructor. (Combined with LF’s eta equality for function types, this gets function extensionality.) But still no equality kinds. LF is often formulated such that only canonical forms (beta normal, eta long) are well typed. This probably does not work for LF=; use your favorite way of formulating extensional type theory. (The details don’t matter here, since I’m going to propose CLF as a generalization of LF=.)
To encode an intrinsically typed language in LF=, you can declare intrinsically typed constants/axioms in LF=, much like LF’s analytic encoding:
tp : Type
el : tp -> Type
Pi : {A:tp} (el A -> tp) -> tp
lam : {A:tp} {B:el A->tp} ({a:el A} el (B a)) -> el (Pi A B)
app : {A:tp} {B:el A->tp} el (Pi A B) -> {a:el A} el (B a)
Like in MLLF, the object language’s judgmental equality is identified with the framework’s judgmental equality. This means you don’t need to encode congruence rules. Interesting equality rules are encoded by assuming an equation:
pi_beta : {A:tp} {B:el A->tp} {b:{a:el A} el (B a)} {a:el A} app A B (lam A B b) a = b a
pi_eta : {A:tp} {B:el A->tp} {f:el (Pi A B)} f = lam A B ([a] app A B f a)
Unlike MLLF, these axioms are just additional constants; there is no restriction other than that they be well typed declarations in LF=. Although these equations are just constants, by using equality reflection to use them, no coercions appear in the terms. If using LF= equations added a coercion to the term, LF= equality would not be an adequate representation of judgmental equality. (The adequacy of reflective equality is inspired by Andromeda.)
For an example of why using equality reflection in the framework doesn’t automatically make the object language extensional, we will cover the non-derivability of the commutativity of addition (as a judgmental equality) in intensional type theory. So we add natural numbers:
nat : tp
zro : el nat
suc : el nat -> el nat
natrec : ...
natrec_zro : ...
natrec_suc : ...
natrec should be the dependent elimination form for the inductive type of natural numbers. natrec_zro and natrec_suc are its defining equations.
We can then define—in the metalanguage—a derived form plus : el nat -> el nat -> el nat using natrec. We will write (plus n m) as (n + m).
The judgment () is represented as ({n:el nat} {m:el nat} n + m = m + n). There is “obviously” no LF= term of that type, since there’s no induction principle for the LF= type el nat, and the only interesting equations about el nat don’t apply for a general element. The object language has natrec as an induction principle, but that only works for proving object language types, not LF= equality.
Of course, if we add reflective equality to the object language, then we should be able to combine natrec and the object language’s equality reflection to prove the LF= equation. Here is a reflective equality type constructor:
eq : {T:tp} el T -> el T -> tp
eqI : {T:tp} {t:el T} el (eq T t t)
eqE : {T:tp} {t1 t2:el T} el (eq T t1 t2) -> t1 = t2
eqK : {T:tp} {t:el T} {p:el (eq T t t)} p = eqI T t
eqE is the equality reflection rule for the object language. Note that there’s no trouble declaring this constant. It’s just another judgmental equality rule for the object language. LF= was prepared all along for equality requiring proof. On the other hand, that means that it requires boring proof in the common case that it’s obvious. So an implementation of LF= would have a practical need for proof automation, to effect type checking of the object language.
Part of the rationale behind CLF—and seemingly many other type system implementations based on synthetic judgments—is that there’s no implementation reason to distinguish type checking from other forms of proof automation. That type checking really is just automatic “proving” of judgments.
On the other hand, if you insist upon decidable type checking, that would be a reason to distinguish type checking from general proof automation. Implementers of synthetic type systems seem united in not considering decidability an important property of type checking.
LF, MLLF, and LF= are all specialized, weak languages, wherein the object language is specified as a list of assumptions. CLF is not. Although it’s still fairly weak, it’s supposed to be strong enough to actually prove stuff. In CLF, the object language is defined as some number of mutually inductive-inductive families.
Declared LF type families correspond to CLF inductive type family definitions. The declared element formers for declared LF families correspond to the constructors of the CLF inductive families. Since multiple families can be declared in LF prior to declaring elements, the CLF inductive families are mutually defined, in general. And since there can also be type dependencies between the families, induction-induction is generally necessary.
LF signatures are not ordinary inductive-inductive families. Although the constructors can take functions as arguments, this does not denote an infinitary constructor, as found in W-types. Instead, a constructor applied to a function argument denotes an expression with a binding subexpression. This is the HOAS interpretation of function types: applying a function just substitutes into an open expression. Since binding and substitution are rather tame notions from formal syntax, there are no restrictions on recurrences in the inductive definition. For example, a recurrence is allowed in the domain of a function type, unlike with infinitary inductive types.
HOAS should be used in CLF too. But unlike LF, this doesn’t involve function types, since CLF’s maps out of an inductive definition allow case analysis, which would not be an adequate representation of binding subexpressions. Instead, lambdas from can be used as the representation of binding subexpressions. In the family definitions, free variables will presumably be kept track of explicitly, as usual for an inductive definition of a formal system. It should be possible to develop some generic definition of the families, for some broad type of signature codes. (It should include at least all LF= signatures. See below.)
In other words, in CLF, the HOAS interpretation of signatures should be definable; it’s not just part of the metatheory. This should allow CLF to serve as both a logical framework, and a metalanguage for reasoning about derivability. With sufficient proof automation, reasoning about derivability allows the convenient use of object languages. The shallowness of the encoding avoids much of the notational inconvenience and computational overhead usually present when reasoning via a metalanguage definition.
The addition of reflective equality types to LF= has the consequence that LF= signatures correspond to quotient inductive-inductive families, not just plain inductive-inductive families. The declarations concluding with an equality type correspond to equality constructors.
In CLF, quotienting is actually needed just to define inductive-inductive families, since lambdas in don’t already satisfy the eta conversion rule of LF’s function type. So in principle, supporting quotient inductive-inductive families should be a straightforward modification to the construction. Due to the use of refinement-style quotients, quotienting the inductive-inductive families doesn’t make the encoding any deeper.
Abhishek Anand, Vincent Rahli, Towards a Formally Verified Proof Assistant, Interactive Theorem Proving (ITP) 2014 (project web, paper web, pdf)
Abhishek Anand, Mark Bickford, Robert L. Constable, Vincent Rahli, A Type Theory with Partial Equivalence Relations as Types, Types for Proofs and Programs (TYPES) 2014 (slides, web, pdf)
Martin Hofmann, Extensional concepts in intensional type theory, Ph.D. dissertation, University of Edinburgh (1995). (link)
Jonathan Sterling, Robert Harper, Algebraic Foundations of Proof Refinement, 2017 (arXiv)
Andrej Bauer, Gaëtan Gilbert, Philipp G. Haselwarter, Matija Pretnar, Christopher A. Stone, Design and Implementation of the Andromeda Proof Assistant, 2018 (arXiv, project web)
Per Martin-Löf, Analytic and Synthetic Judgments in Type Theory, 1994 (pdf)
Bengt Nordström, Kent Petersson, Jan M. Smith, Programming in Martin-Löf’s Type Theory, Oxford University Press (1990) (web)
Jesper Cockx, Andreas Abel, Sprinkles of extensionality for your vanilla type theory, Types for Proofs and Programs (TYPES) 2016 (pdf)